Solving MIPs via scaling-based augmentation
Authors
Abstract
Similar sources
Information Filtering via a Scaling-Based Function
Finding a universal description of algorithm optimization is one of the key challenges in personalized recommendation. In this article, we introduce for the first time a scaling-based algorithm (SCL) that is independent of the recommendation list length, based on a hybrid of heat conduction and mass diffusion, by finding the scaling function for the tunable parameter and object average de...
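The heat-conduction/mass-diffusion hybrid mentioned above is commonly defined on a user-object bipartite network. The sketch below is a minimal NumPy illustration of such a hybrid diffusion score with a tunable mixing parameter; the function name hybrid_scores, the parameter name lam, and the specific kernel (the standard heat-conduction/mass-diffusion hybrid) are assumptions for illustration, not the SCL algorithm from the cited article.

```python
import numpy as np

def hybrid_scores(A, lam):
    """Hybrid heat-conduction / mass-diffusion recommendation scores (toy sketch).

    A   : binary user-object adjacency matrix, shape (n_users, n_objects)
    lam : mixing parameter in [0, 1]; lam = 1 gives pure mass diffusion,
          lam = 0 pure heat conduction (illustrative convention).
    """
    k_user = A.sum(axis=1, keepdims=True)   # user degrees, shape (n_users, 1)
    k_obj = A.sum(axis=0, keepdims=True)    # object degrees, shape (1, n_objects)
    # Object-object transfer matrix:
    #   W[a, b] = (sum_i A[i, a] * A[i, b] / k_user[i]) / (k_obj[a]**(1-lam) * k_obj[b]**lam)
    overlap = (A / k_user).T @ A
    W = overlap / (k_obj.T ** (1 - lam) * k_obj ** lam)
    # Each user's collected objects spread resource back onto all objects.
    return A @ W.T                          # shape (n_users, n_objects)

# Toy example: 3 users, 4 objects.
A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [1, 0, 1, 1]], dtype=float)
print(np.round(hybrid_scores(A, lam=0.5), 3))
```

Sweeping lam trades the heat-conduction term against the mass-diffusion term, which is the kind of tunable-parameter behavior the scaling function described above is meant to capture.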
Directionality Effects via Distance-based Penalty Scaling
In this paper we develop a new proposal for distance-based penalty scaling in Optimality-theoretic analyses, including Harmonic Grammar. We apply this technique to the analysis of two phonological phenomena, both of which have posed challenges to implementation using standard constraint-based methods: directionality effects and bounded domain windows. We show that distance-based penalty scaling...
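Since the abstract above only names the mechanism, here is a hypothetical toy showing what distance-based penalty scaling can look like in a Harmonic Grammar-style evaluation: each violation's penalty is multiplied by a factor that grows with its distance from a designated edge, so candidates are distinguished by where, not just how often, they violate a constraint. The function, weights, and scaling scheme are all illustrative assumptions, not the authors' proposal.

```python
def scaled_harmony(violation_positions, weight, length, direction="left", scale=1.0):
    """Harmony contribution of one constraint under distance-based penalty scaling.

    violation_positions : 0-based positions (e.g. syllable indices) of violations
    weight              : base constraint weight in the Harmonic Grammar
    length              : number of positions in the candidate
    direction           : edge the distance is measured from ("left" or "right")
    scale               : how strongly distance amplifies the penalty
    """
    total = 0.0
    for pos in violation_positions:
        dist = pos if direction == "left" else (length - 1 - pos)
        total -= weight * (1.0 + scale * dist)  # penalty grows with distance from the edge
    return total

# Two candidates of length 5 with a single violation each, differing only in its position.
print("violation near left edge :", scaled_harmony([1], weight=2.0, length=5))
print("violation near right edge:", scaled_harmony([4], weight=2.0, length=5))
# With left-edge scaling the first candidate has higher harmony (-4 vs. -10), so the
# grammar prefers violations near the left edge: a directionality effect that a flat
# violation count cannot express.
```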
Scaling Up: Solving POMDPs through Value Based Clustering
Partially Observable Markov Decision Processes (POMDPs) provide an appropriately rich model for agents operating under partial knowledge of the environment. Since finding an optimal POMDP policy is intractable, approximation techniques have been a main focus of research, among them point-based algorithms, which scale relatively well, up to thousands of states. An important decision in a point...
Solving Linear Systems with Randomized Augmentation
Our randomized preprocessing of a matrix by means of augmentation counters its degeneracy and ill conditioning, uses neither pivoting nor orthogonalization, readily preserves matrix structure and sparseness, and leads to dramatic speedup of the solution of general and structured linear systems of equations in terms of both estimated arithmetic time and observed CPU time.
Solving Linear Systems with Randomized Augmentation II
With high probability, our randomized augmentation of a matrix eliminates its rank deficiency and ill conditioning. Our techniques avoid various drawbacks of the customary algorithms based on pivoting and orthogonalization; e.g., we readily preserve matrix structure and sparseness. Furthermore, our randomized augmentation is expected to precondition quite a general class of ill conditioned inpu...
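The two abstracts above state the effect of randomized augmentation without detail. The NumPy sketch below is only an assumption-based illustration of the basic phenomenon, not the papers' actual preprocessing: bordering a rank-deficient matrix with Gaussian random blocks of the right size yields, with probability 1, a nonsingular and much better conditioned matrix, with no pivoting or orthogonalization involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build an n x n matrix of rank n - r, hence singular (nullity r).
n, r = 100, 3
A = rng.standard_normal((n, n - r)) @ rng.standard_normal((n - r, n))

# Randomized augmentation: border A with Gaussian blocks B (n x r), C (r x n), D (r x r).
# Since the border size matches the nullity, the bordered matrix K is nonsingular
# with probability 1.
B = rng.standard_normal((n, r))
C = rng.standard_normal((r, n))
D = rng.standard_normal((r, r))
K = np.block([[A, B],
              [C, D]])

print("rank(A):", np.linalg.matrix_rank(A), "of", n)
print("cond(A):", np.linalg.cond(A))        # essentially infinite (singular matrix)
print("rank(K):", np.linalg.matrix_rank(K), "of", n + r)
print("cond(K):", np.linalg.cond(K))        # finite and far smaller
```

Per the abstracts, working through such a well-conditioned augmented system rather than through the original matrix is what yields the reported speedups.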
Journal
Journal title: Discrete Optimization
Year: 2018
ISSN: 1572-5286
DOI: 10.1016/j.disopt.2017.08.004